Modeling differential stress expressions in urban and rural regions of China can provide a better understanding of the effects of urbanization on psychological well-being in a country that has developed rapidly over the past two decades. This paper studies linguistic differences in the experiences and expressions of stress in urban and rural China, using hierarchical mixed-effects models over Weibo posts from more than 65,000 users across 329 counties. We analyzed phrases, topical themes, and psycho-linguistic word choices in Weibo posts mentioning stress to better understand appraisal differences surrounding psychological stress in rural and urban communities in China; we then compared them with large-scale polls from Gallup. After controlling for socioeconomic and gender differences, we found that rural communities tend to express stress in emotional and personal themes such as relationships, health, and opportunity, while users in urban areas express stress in relative, temporal, and external themes such as work, politics, and economics. These differences persist beyond controls for GDP and urbanization, indicating fundamentally different lifestyles between rural and urban residents in very specific environments, arguably with different sources of stress. We found corroborating trends in physical, financial, and social wellness with urbanization in the Gallup polls.
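As an illustration of the modeling setup only, a county-grouped hierarchical mixed-effects regression of a stress-language feature on urbanicity with socioeconomic and gender controls could be sketched as below; the column names (stress_score, urban, log_gdp, pct_female, county) are hypothetical placeholders, not the paper's variables.

```python
# Minimal sketch of a hierarchical (mixed-effects) model relating a
# stress-language feature to urbanicity, with county-level random intercepts.
# Column names are hypothetical placeholders, not the paper's variables.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("county_language_features.csv")  # hypothetical input file

# Fixed effects: urban indicator plus socioeconomic / gender controls.
# Grouping level: county, giving each county its own random intercept.
model = smf.mixedlm(
    "stress_score ~ urban + log_gdp + pct_female",
    data=df,
    groups=df["county"],
)
result = model.fit()
print(result.summary())
```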
People convey their intentions and attitudes through the linguistic styles of the text they write. In this study, we investigate lexicon usage in styled text through two lenses: human perception and machine word importance, since words differ in the strength of the style cues they provide. To collect human perception labels, we curate a new dataset, Hummingbird, on top of benchmark style datasets. We have crowd workers highlight the representative words in a text that make them think the text carries the following styles: politeness, sentiment, offensiveness, and five emotion types. We then compare these human word labels with word importance derived from popular fine-tuned style classifiers such as BERT. Our results show that BERT often identifies content words that are irrelevant to the target style as important words for style prediction, which humans do not perceive in the same way, even though for some styles (e.g., positive sentiment and joy) the human- and machine-identified words share significant overlap.
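One common way to extract machine word importance from a fine-tuned style classifier such as BERT is input-times-gradient saliency over the token embeddings; the paper does not prescribe this exact attribution method, so treat the following as an illustrative sketch with a placeholder checkpoint name.

```python
# Sketch: per-token importance from a (fine-tuned) BERT style classifier
# via input-times-gradient saliency. The checkpoint name is a placeholder.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "bert-base-uncased"  # placeholder; use the fine-tuned style classifier
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)
model.eval()

text = "thank you so much, that was incredibly kind of you"
enc = tokenizer(text, return_tensors="pt")

# Forward through the embeddings explicitly so we can take gradients w.r.t. them.
embeds = model.get_input_embeddings()(enc["input_ids"]).detach()
embeds.requires_grad_(True)
logits = model(inputs_embeds=embeds, attention_mask=enc["attention_mask"]).logits

target = int(logits.argmax(dim=-1))   # predicted style class (or a fixed label id)
logits[0, target].backward()

# Input-times-gradient, reduced over the embedding dimension.
saliency = (embeds.grad * embeds).sum(dim=-1).abs().squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
for tok, score in sorted(zip(tokens, saliency.tolist()), key=lambda x: -x[1])[:5]:
    print(f"{tok}\t{score:.4f}")
```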
Social media is increasingly used for large-scale population predictions, such as estimating community health statistics. However, social media users are often not representative samples of the intended population, a problem known as selection bias. Within the social sciences, such bias is typically addressed with reweighting techniques, in which observations are reweighted according to how under- or over-sampled their socio-demographic groups are. Yet reweighting is rarely evaluated for whether it improves prediction. In this two-part study, we first evaluate standard "off-the-shelf" reweighting techniques and find that they offer no improvement, and often even degrade prediction accuracy, across four tasks of predicting US county population health statistics from Twitter. The core reason for the degraded performance appears to be their reliance on either sparse or shrunken estimates of each population's socio-demographics. In the second part of the study, we develop and evaluate robust poststratification, which comprises three methods to address these problems: (1) estimator redistribution to account for shrinking, as well as (2) adaptive binning and (3) informed smoothing to handle sparse socio-demographic estimates. We show that each of these methods leads to significant improvements in prediction accuracy over standard reweighting approaches. Taken together, robust poststratification achieves state-of-the-art prediction accuracy, yielding a 53.0% increase in variance explained (R^2) in the case of surveyed life satisfaction and a 17.8% average increase across all tasks.
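For background, the standard post-stratification idea the paper builds on reweights each socio-demographic bin by the ratio of its population share to its sample share. The sketch below uses made-up bins and numbers, and it does not include the paper's robust extensions (estimator redistribution, adaptive binning, informed smoothing).

```python
# Minimal post-stratification sketch: reweight user-level estimates so the
# sample's socio-demographic mix matches the target (e.g., county) population.
# Bins, scores, and census shares are hypothetical placeholders.
import pandas as pd

users = pd.DataFrame({
    "bin":   ["f_18_29", "f_30_plus", "m_18_29", "m_30_plus"],
    "score": [0.62, 0.55, 0.48, 0.51],   # language-based estimates per bin
    "n":     [400, 150, 500, 120],        # users observed in each bin
})
population_share = {                       # census share of each bin
    "f_18_29": 0.20, "f_30_plus": 0.32, "m_18_29": 0.18, "m_30_plus": 0.30,
}

sample_share = users["n"] / users["n"].sum()
users["weight"] = users["bin"].map(population_share) / sample_share

# Post-stratified county-level estimate vs. the naive unweighted one.
weighted = (users["score"] * users["weight"] * users["n"]).sum() / \
           (users["weight"] * users["n"]).sum()
naive = (users["score"] * users["n"]).sum() / users["n"].sum()
print(f"naive={naive:.3f}  poststratified={weighted:.3f}")
```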
A machine learning (ML) system must learn not only to match the output of a target function on a training set, but also to generalize to novel situations in order to yield accurate predictions at deployment. In most practical applications, the user cannot exhaustively enumerate every possible input to the model; strong generalization performance is therefore crucial to the development of ML systems which are performant and reliable enough to be deployed in the real world. While generalization is well-understood theoretically in a number of hypothesis classes, the impressive generalization performance of deep neural networks has stymied theoreticians. In deep reinforcement learning (RL), our understanding of generalization is further complicated by the conflict between generalization and stability in widely-used RL algorithms. This thesis will provide insight into generalization by studying the learning dynamics of deep neural networks in both supervised and reinforcement learning tasks.
We study the learning dynamics of self-predictive learning for reinforcement learning, a family of algorithms that learn representations by minimizing the prediction error of their own future latent representations. Despite its recent empirical success, such algorithms have an apparent defect: trivial representations (such as constants) minimize the prediction error, yet it is obviously undesirable to converge to such solutions. Our central insight is that careful design of the optimization dynamics is critical to learning meaningful representations. We identify that faster-paced optimization of the predictor and semi-gradient updates on the representation are crucial to preventing representation collapse. Then, in an idealized setup, we show that the self-predictive learning dynamics carry out a spectral decomposition of the state transition matrix, effectively capturing information about the transition dynamics. Building on the theoretical insights, we propose bidirectional self-predictive learning, a novel self-predictive algorithm that learns two representations simultaneously. We examine the robustness of our theoretical insights with a number of small-scale experiments and showcase the promise of the novel representation learning algorithm with large-scale experiments.
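A minimal sketch of the dynamics analyzed here, assuming a BYOL-style latent-prediction loss (the paper's exact setup may differ): the prediction target is detached so the encoder receives only a semi-gradient, and the predictor is optimized at a faster pace via more inner steps and a larger learning rate. Dimensions and learning rates are illustrative.

```python
# Sketch of self-predictive representation learning: predict the next latent
# state, stop gradients through the target, and optimize the predictor faster
# than the encoder.
import torch
import torch.nn as nn

obs_dim, latent_dim = 16, 8
encoder = nn.Linear(obs_dim, latent_dim)
predictor = nn.Linear(latent_dim, latent_dim)

enc_opt = torch.optim.SGD(encoder.parameters(), lr=1e-3)
pred_opt = torch.optim.SGD(predictor.parameters(), lr=1e-2)  # faster predictor

def update(s, s_next, predictor_steps=5):
    # Faster-paced predictor optimization: several inner steps per encoder step,
    # with the encoder outputs held fixed.
    for _ in range(predictor_steps):
        loss_p = (predictor(encoder(s).detach())
                  - encoder(s_next).detach()).pow(2).mean()
        pred_opt.zero_grad(); loss_p.backward(); pred_opt.step()

    # Semi-gradient update on the representation: the target encoder(s_next)
    # is treated as a constant via detach().
    loss_e = (predictor(encoder(s)) - encoder(s_next).detach()).pow(2).mean()
    enc_opt.zero_grad(); loss_e.backward(); enc_opt.step()
    return loss_e.item()

s, s_next = torch.randn(32, obs_dim), torch.randn(32, obs_dim)
print(update(s, s_next))
```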
The extragradient method has recently gained increasing attention, due to its convergence behavior on smooth games. In $n$-player differentiable games, the eigenvalues of the Jacobian of the vector field are distributed on the complex plane, exhibiting more convoluted dynamics compared to classical (i.e., single player) minimization. In this work, we perform a polynomial-based analysis of the extragradient with momentum for optimizing games with \emph{cross-shaped} Jacobian spectrum on the complex plane. We show two results. First, depending on the hyperparameter setup, the extragradient with momentum exhibits three different modes of convergence: when the eigenvalues are distributed $i)$ on the real line, $ii)$ both on the real line along with complex conjugates, and $iii)$ only as complex conjugates. Then, we focus on the case $ii)$, i.e., when the eigenvalues of the Jacobian have \emph{cross-shaped} structure, as observed in training generative adversarial networks. For this problem class, we derive the optimal hyperparameters of the momentum extragradient method, and show that it achieves an accelerated convergence rate.
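For reference, one standard way to write the extragradient step on the game's vector field $F$ with a heavy-ball momentum term added (the paper's exact parameterization may differ) is
\[
x_{k+1/2} = x_k - \gamma F(x_k), \qquad
x_{k+1} = x_k - \gamma F(x_{k+1/2}) + \beta\,(x_k - x_{k-1}),
\]
where $\gamma$ is the step size and $\beta$ the momentum parameter; setting $\beta = 0$ recovers the plain extragradient method.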
Deep generative machine learning models (DGMs) have been growing in popularity across the design community thanks to their ability to learn and mimic complex data distributions. DGMs are conventionally trained to minimize the statistical divergence between the distribution of generated data and the distribution of the dataset on which they are trained. While sufficient for the task of generating "realistic" fake data, this objective is typically insufficient for design synthesis tasks. Instead, design problems typically call for adherence to design requirements, such as performance targets and constraints. Advancing DGMs in engineering design requires new training objectives that promote engineering design goals. In this paper, we introduce the first deep generative model that simultaneously optimizes for performance, feasibility, diversity, and target achievement. We benchmark the performance of the proposed method against several deep generative models over eight evaluation metrics that focus on feasibility, diversity, and satisfaction of design performance targets. The methods are tested on a challenging multi-objective bicycle frame design problem with skewed, multimodal data of different datatypes. The proposed framework was found to outperform all deep generative models in six of the eight metrics.
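The abstract does not spell out the training objective, so the following is only a schematic of the general pattern of augmenting a generator loss with design-aware terms scored by a differentiable surrogate; all terms and weights here are illustrative assumptions, not the paper's formulation.

```python
# Schematic (not the paper's actual objective): a generator loss augmented with
# performance, feasibility, and diversity terms from a differentiable surrogate.
import torch

def design_aware_generator_loss(adv_loss, designs, surrogate, target,
                                w_perf=1.0, w_feas=1.0, w_div=0.1):
    perf, feasibility = surrogate(designs)            # surrogate predictions
    perf_term = torch.relu(target - perf).mean()      # penalize missed targets
    feas_term = (1.0 - feasibility).mean()            # push toward feasible designs
    div_term = -torch.cdist(designs, designs).mean()  # encourage spread of designs
    return adv_loss + w_perf * perf_term + w_feas * feas_term + w_div * div_term
```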
This paper demonstrates how Automated Machine Learning (AutoML) methods can be used as effective surrogate models in engineering design problems. To do so, we consider the challenging problem of structurally-performant bicycle frame design and demonstrate across-the-board dominance by AutoML in regression and classification surrogate modeling tasks. We also introduce FRAMED -- a parametric dataset of 4500 bicycle frames based on bicycles designed by practitioners and enthusiasts worldwide. Accompanying these frame designs, we provide ten structural performance values such as weight, displacements under load, and safety factors computed using finite element simulations for all the bicycle frame designs. We formulate two challenging test problems: a performance-prediction regression problem and a feasibility-prediction classification problem. We then systematically search for optimal surrogate models using Bayesian hyperparameter tuning and neural architecture search. Finally, we show how a state-of-the-art AutoML method can be effective for both regression and classification problems. We demonstrate that the proposed AutoML models outperform the strongest gradient boosting and neural network surrogates identified through Bayesian optimization, improving the F1 score by 24\% for classification and reducing the mean absolute error by 12.5\% for regression. Our work introduces a dataset for bicycle design practitioners, provides two benchmark problems for surrogate modeling researchers, and demonstrates the advantages of AutoML in machine learning tasks. The dataset and code are provided at \url{http://decode.mit.edu/projects/framed/}.
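For context on the baselines, a Bayesian hyperparameter search over a gradient-boosting surrogate might look like the sketch below; the library choices (Optuna and scikit-learn) and the placeholder data are assumptions for illustration, with X and y standing in for FRAMED frame parameters and one structural performance value.

```python
# Sketch of a Bayesian-tuned gradient-boosting surrogate for a regression task.
# Library choices and the random placeholder data are illustrative assumptions.
import numpy as np
import optuna
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # placeholder for frame design parameters
y = rng.normal(size=200)         # placeholder for a structural performance value

def objective(trial):
    model = GradientBoostingRegressor(
        n_estimators=trial.suggest_int("n_estimators", 100, 1000),
        learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        max_depth=trial.suggest_int("max_depth", 2, 8),
        subsample=trial.suggest_float("subsample", 0.5, 1.0),
    )
    # Maximize negative MAE, i.e. minimize mean absolute error under 5-fold CV.
    return cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)
```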
The success of neural architecture search (NAS) is limited by its excessive compute requirements. While modern weight-sharing NAS methods such as DARTS can complete the search in single-digit GPU days, extracting the final best architecture from the shared weights is notoriously unreliable. Training-Speed-Estimate (TSE), a recently developed generalization estimator with a Bayesian marginal likelihood interpretation, has been used in place of the validation loss in DARTS's gradient-based optimization. This prevents the DARTS skip-connection collapse, which significantly improves performance on NASBench-201 and the original DARTS search space. We extend those results by applying a range of DARTS diagnostics and show several unusual behaviors that arise from not using a validation set. Furthermore, our experiments yield concrete examples of the depth gap and of topology selection exerting a strong influence on search performance, despite generally receiving limited attention in the literature compared with operation selection.
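Training-Speed-Estimate is, roughly, the training loss summed over the early part of the optimization trajectory, so no validation set is needed; the exact variant used in the paper may aggregate the losses differently. A minimal sketch for ranking candidate architectures:

```python
# Sketch: score a candidate architecture by a training-speed estimate, i.e. the
# sum of training losses over the first few epochs, with no validation set.
import torch

def training_speed_estimate(model, loader, criterion, epochs=3, lr=0.01):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    total = 0.0
    for _ in range(epochs):
        for x, y in loader:
            loss = criterion(model(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
            total += loss.item()   # accumulate the training loss as it falls
    return total                   # lower sum -> faster training -> preferred

# Usage: score each candidate and keep the architecture with the lowest estimate.
# best = min(candidates, key=lambda m: training_speed_estimate(m, loader, criterion))
```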
Stochastic gradient descent with momentum (SGDM) is the dominant algorithm in many optimization settings, including convex optimization instances and non-convex neural network training. Yet, in the stochastic setting, momentum interferes with gradient noise, often requiring specific step-size and momentum choices in order to guarantee convergence, setting acceleration aside. Proximal point methods, on the other hand, have attracted much attention due to their numerical stability and resilience against imperfect tuning. Their stochastic accelerated variants, however, have received limited attention: how momentum interacts with the stability of (stochastic) proximal point methods remains largely unexplored. To address this, we focus on the convergence and stability of the stochastic proximal point algorithm with momentum (SPPAM), and show that, under proper hyperparameter tuning, SPPAM enjoys a faster linear convergence rate with a better contraction factor than the stochastic proximal point algorithm (SPPA). In terms of stability, we show that SPPAM depends on problem constants more favorably than SGDM, allowing a wider range of step sizes and momentum values that lead to convergence.
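As a point of reference (the paper's exact formulation may differ), one natural momentum variant of the stochastic proximal point step applies the proximal operator of the sampled component $f_{i_k}$ to a heavy-ball extrapolation of the iterates:
\[
x_{k+1} = \arg\min_{x} \; f_{i_k}(x) + \frac{1}{2\eta}\left\| x - \bigl(x_k + \beta\,(x_k - x_{k-1})\bigr) \right\|^2 ,
\]
where $\eta > 0$ is the step size and $\beta \ge 0$ the momentum parameter; $\beta = 0$ recovers the plain stochastic proximal point algorithm (SPPA).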